04 Spark essentials


In [ ]:
# Make it Python2 & Python3 compatible
from __future__ import print_function
import sys
if sys.version_info[0] == 3:
    xrange = range

Spark context

The notebook deployment includes Spark automatically within each Python notebook kernel. This means that, upon kernel instantiation, there is a SparkContext object called sc immediately available in the notebook, just as in a PySpark shell. Let's take a look at it:


In [ ]:
?sc
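
Beyond the built-in help above, a quick programmatic check can confirm that sc really is a SparkContext instance:


In [ ]:
# Quick sanity check: sc should be a pyspark.context.SparkContext instance
from pyspark import SparkContext
print( type(sc) )
print( isinstance(sc, SparkContext) )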

We can inspect some of the SparkContext properties:


In [ ]:
# Spark version we are using
print( sc.version )

In [ ]:
# Name of the application we are running
print(sc.appName)

In [ ]:
sc.appName

In [ ]:
# Some configuration variables
print( sc.defaultParallelism )
print( sc.defaultMinPartitions )

In [ ]:
# Username running all Spark processes
# --> Note this is a method, not a property
print( sc.sparkUser() )

Spark configuration


In [ ]:
# Print out the SparkContext configuration
print( sc._conf.toDebugString() )

In [ ]:
# Another way to get similar information
from pyspark import SparkConf, SparkContext
SparkConf().getAll()
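
If we are only interested in a specific property rather than the full dump, we can query it from a copy of the running context's configuration. The keys used below, spark.master and spark.app.name, are standard Spark properties that should be defined in any running context:


In [ ]:
# Fetch individual configuration values from the running context
conf = sc.getConf()
print( conf.get("spark.master") )
print( conf.get("spark.app.name") )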

Spark execution modes

We can also take a look at the execution mode this kernel is running under, by inspecting the configuration data shown above:


In [ ]:
print( sc._conf.toDebugString() )

... this includes the execution mode for Spark. The default mode is local, i.e. all Spark processes run locally in the launched Virtual Machine. This is fine for developing and testing with small datasets.
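
The effective execution mode can also be read directly from the context's master URL; in the default local mode it should look something like local[*] (the exact value depends on how the kernel was launched):


In [ ]:
# The master URL tells us where Spark will run its executors
print( sc.master )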

But to run Spark applications on bigger datasets, they must be executed on a remote cluster. This deployment comes with configuration modes for that, which require:

  • network adjustments to make the VM "visible" from the cluster: the virtual machine must be started in bridged mode (the default Vagrantfile already contains code for doing so, but it must be uncommented)
  • configuring the addresses for the cluster. This is done within the VM by using the spark-notebook script, such as
    sudo service spark-notebook set-addr <master-ip> <namenode-ip> <historyserver-ip>
  • activating the desired mode, by executing
    sudo service spark-notebook set-mode (local | standalone | yarn)

These operations can also be performed outside the VM by telling vagrant to relay them, e.g.

vagrant ssh -c "sudo service spark-notebook set-mode local"
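
Once the kernel has been restarted under the desired mode, a small check such as the following can confirm which mode is actually in effect. This is just a sketch, assuming only that sc.master follows the usual Spark URL conventions (local[...], spark://host:port for standalone, yarn for YARN):


In [ ]:
# Infer the current execution mode from the master URL
master = sc.master
if master.startswith("local"):
    mode = "local"
elif master.startswith("spark://"):
    mode = "standalone"
elif master.startswith("yarn"):
    mode = "yarn"
else:
    mode = "unknown"
print( mode, "--", master )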

A trivial test

Let's do a trivial operation that creates an RDD and executes an action on it, so that we can check that the kernel is able to launch executors:


In [ ]:
from operator import add

# Create an RDD of 10000 integers and sum them with a distributed reduce
l = sc.range(10000)
print( l.reduce(add) )
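
As a slightly richer check (still just a sketch, built on the same sc.range RDD), we can chain a transformation with a couple of actions to confirm that work is actually sent to the executors:


In [ ]:
# Chain a transformation (map) with actions (count, take) on an RDD
squares = sc.range(10000).map(lambda x: x * x)
print( squares.count() )
print( squares.take(5) )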
